# DPO alignment

**Bielik 4.5B V3.0 Instruct** (speakleash, license: Apache-2.0). Bielik-4.5B-v3-Instruct is a 4.6-billion-parameter Polish generative text model fine-tuned from Bielik-4.5B-v3, with strong Polish-language comprehension and processing capabilities. Tags: Large Language Model, Transformers, Other. Downloads: 1,121; Likes: 13.
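
The models in this list carry the Transformers tag, so they can be loaded through the standard `transformers` text-generation API. Below is a minimal sketch, assuming the Bielik model is hosted on the Hugging Face Hub under the repository id `speakleash/Bielik-4.5B-v3.0-Instruct` (an assumed id; check the publisher's page for the exact name and hardware requirements).

```python
# Minimal chat-style generation with a Transformers-tagged model.
# The repository id below is an assumption; substitute the exact Hub id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "speakleash/Bielik-4.5B-v3.0-Instruct"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat prompt with the model's own chat template.
messages = [{"role": "user", "content": "Przedstaw się w jednym zdaniu."}]  # "Introduce yourself in one sentence."
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Strip the prompt tokens and decode only the model's reply.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```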

**Maestrale Chat V0.4 Beta** (mii-llm). An Italian chat model based on Mistral-7B, pre-trained and fine-tuned on a large Italian corpus. Tags: Large Language Model, Transformers, Other. Downloads: 6,555; Likes: 8.

**Llama 3 SauerkrautLM 8b Instruct** (VAGOsolutions, license: Other). Llama-3-SauerkrautLM-8b-Instruct is an improved version of Meta-Llama-3-8B-Instruct, developed jointly by VAGO Solutions and Hyperspace.ai; it is aligned with DPO and supports German and English. Tags: Large Language Model, Transformers, Supports Multiple Languages. Downloads: 20.01k; Likes: 54.
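
The SauerkrautLM entry above is aligned with DPO (Direct Preference Optimization), the technique this collection is named after. DPO trains the policy directly on preference pairs by increasing the log-probability margin of the chosen response over the rejected one, measured relative to a frozen reference model. A minimal sketch of the loss in plain PyTorch follows; the function and tensor names are illustrative, not taken from any of the model cards above.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss from summed sequence log-probabilities, each of shape (batch,).

    The reference log-probs come from a frozen copy of the starting model;
    only the policy receives gradients.
    """
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    # -log sigmoid(beta * (policy margin - reference margin)), averaged over the batch
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy usage: random (negative) log-probabilities for a batch of 4 preference pairs.
logps = [-torch.rand(4) for _ in range(4)]
print(dpo_loss(*logps).item())
```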